The combination of artist-curated scans and deep implicit functions (IF) is enabling the creation of detailed, clothed, 3D humans from images. However, existing methods are far from perfect. IF-based methods recover free-form geometry but produce disembodied limbs or degenerate shapes for unseen poses or clothes. To increase robustness in these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body. What we want is a method that combines the best properties of implicit and explicit methods. To this end, we make two key observations: (1) current networks are better at inferring detailed 2D maps than full 3D surfaces, and (2) a parametric model can be seen as a "canvas" for stitching together detailed surface patches. ECON infers high-fidelity 3D humans even in loose clothes and challenging poses, while producing realistic faces and fingers. This goes beyond previous methods. Quantitative evaluation on the CAPE and Renderpeople datasets shows that ECON is more accurate than the state of the art. Perceptual studies also show that ECON's perceived realism is better by a large margin. Code and models are available for research purposes at https://xiuyuliang.cn/econ
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation, including the face, body, hands, and feet, is essential beyond conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in real time. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections, and Pose-Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve accuracy. Our method is able to localize whole-body keypoints accurately and track humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source code, and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
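To make the regression step concrete, here is a minimal sketch of plain integral (soft-argmax) keypoint regression, the differentiable heatmap-to-coordinate operation that SIKR builds on; the symmetric gradient handling that distinguishes SIKR is not reproduced here, and the shapes and names are illustrative:

```python
import torch

def soft_argmax_2d(heatmaps: torch.Tensor) -> torch.Tensor:
    """Integral keypoint regression: take the expectation of pixel coordinates
    under a softmax-normalized heatmap, one map per keypoint.

    heatmaps: (B, K, H, W) unnormalized logits.
    returns:  (B, K, 2) (x, y) keypoint coordinates in pixel units.
    """
    b, k, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.reshape(b, k, -1), dim=-1).reshape(b, k, h, w)
    ys = torch.arange(h, dtype=probs.dtype).view(1, 1, h, 1)
    xs = torch.arange(w, dtype=probs.dtype).view(1, 1, 1, w)
    x = (probs * xs).sum(dim=(2, 3))  # expected x per keypoint
    y = (probs * ys).sum(dim=(2, 3))  # expected y per keypoint
    return torch.stack([x, y], dim=-1)

coords = soft_argmax_2d(torch.randn(2, 17, 64, 48))
print(coords.shape)  # torch.Size([2, 17, 2])
```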
Current methods for learning realistic and animatable 3D clothed avatars need either posed 3D scans or 2D images with carefully controlled user poses. In contrast, our goal is to learn an avatar from only 2D images of people in unconstrained poses. Given a set of images, our method estimates a detailed 3D surface from each image and then combines these into an animatable avatar. Implicit functions are well suited to the first task, as they can capture details like hair and clothing. However, current methods are not robust to varied human poses and often produce 3D surfaces with broken or disembodied limbs, missing details, or non-human shapes. The problem is that these methods use global feature encoders that are sensitive to global pose. To address this, we propose ICON ("Implicit Clothed humans Obtained from Normals"), which uses local features instead. ICON has two main modules, both of which exploit the SMPL(-X) body model. First, ICON infers detailed clothed-human normals (front/back) conditioned on the SMPL(-X) normals. Second, a visibility-aware implicit surface regressor produces an iso-surface of the human occupancy field. Importantly, at inference time, a feedback loop alternates between refining the SMPL(-X) mesh using the inferred clothed normals and then refining the normals themselves. Given multiple reconstructed frames of a subject in varied poses, we use SCANimate to generate an animatable avatar from them. Evaluation on the AGORA and CAPE datasets shows that ICON outperforms the state of the art in reconstruction, even with heavily limited training data. Additionally, it is much more robust to out-of-distribution samples, e.g., in-the-wild poses/images and out-of-frame cropping. ICON takes a step towards robust 3D clothed-human reconstruction from in-the-wild images. This enables the creation of avatars directly from video, with personalized and natural pose-dependent cloth deformation.
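The inference-time feedback loop described above can be sketched as follows; all callables here are hypothetical stand-ins for ICON's actual components (the normal renderer, the normal-prediction network, and the SMPL(-X) refitting step), so this shows the control flow only:

```python
def feedback_refinement(image, smpl_mesh, render_normals, predict_normals, refit,
                        n_iters=2):
    """Alternate between (a) predicting detailed clothed-human normals conditioned
    on the current body mesh and (b) refitting the body mesh to those normals.
    All callables are placeholder assumptions, not the paper's modules."""
    cloth_nrm = None
    for _ in range(n_iters):
        body_nrm = render_normals(smpl_mesh)          # front/back SMPL(-X) normal maps
        cloth_nrm = predict_normals(image, body_nrm)  # detailed clothing normals
        smpl_mesh = refit(smpl_mesh, cloth_nrm)       # update pose/shape toward normals
    return smpl_mesh, cloth_nrm  # passed on to the implicit surface regressor
```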
With its continuously growing popularity around the world, fitness activity analysis has become an emerging research topic in computer vision. While a variety of new tasks and algorithms have been proposed recently, there is a growing hunger for data resources with high-quality data, fine-grained labels, and diverse environments. In this paper, we present FLAG3D, a large-scale 3D fitness activity dataset with language instructions, containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human poses captured by an advanced MoCap system to handle complex activities and large movements; 2) detailed and professional language instructions that describe how to perform a specific activity; 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. Extensive experiments and in-depth analysis show that FLAG3D offers great research value for various challenges, such as cross-domain human action recognition, dynamic human mesh recovery, and language-guided human action generation. Our dataset and source code will be publicly available at https://andytang15.github.io/FLAG3D.
Achieving multiple genres and long-term choreography sequences from given music is a challenging task, due to the lack of a multi-genre dataset. To tackle this problem, we propose a Multi Art Genre Intelligent Choreography Dataset (MagicDance). The data of MagicDance was captured from professional dancers assisted by motion-capture technicians. It contains a total of 8 hours of 3D motion-captured human dance with paired music, spanning 16 different dance genres. To the best of our knowledge, MagicDance is the 3D dance dataset with the most genres. In addition, we find that the two existing types of methods (generation-based and synthesis-based) can each satisfy only one of diversity and duration, but they complement each other to some extent. Based on this observation, we also propose a generation-synthesis choreography network (MagicNet), which cascades a Diffusion-based 3D Diverse Dance fragments Generation Network (3DGNet) and a Genre&Coherent aware Retrieval Module (GCRM). The former can generate diverse dance fragments from a single music clip. The latter selects the best dance fragments generated by 3DGNet and stitches them into a complete dance according to a genre and coherence matching score. Quantitative and qualitative experiments demonstrate the quality of MagicDance and the state-of-the-art performance of MagicNet.
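As an illustration of how a retrieval module like GCRM might rank candidate fragments, the toy score below combines a genre indicator with a pose-coherence term; the features, weighting, and the `fragment_score` helper itself are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def fragment_score(prev_end_pose, cand_poses, cand_genre, target_genre, w_genre=0.5):
    """Toy genre-and-coherence matching score: coherence is the cosine similarity
    between the last pose of the dance so far and the first pose of the candidate
    fragment; genre is a simple indicator match."""
    a = prev_end_pose.ravel()
    b = cand_poses[0].ravel()
    coherence = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    genre = 1.0 if cand_genre == target_genre else 0.0
    return w_genre * genre + (1.0 - w_genre) * coherence

# pick the best candidate fragment for a target genre
prev = np.random.randn(24, 3)                      # e.g. 24 joints x 3 rotation params
cands = [(np.random.randn(30, 24, 3), g) for g in ("jazz", "hiphop", "jazz")]
best = max(cands, key=lambda c: fragment_score(prev, c[0], c[1], "jazz"))
```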
This paper presents SimVTP: a Simple Video-Text Pretraining framework via masked autoencoders. We randomly mask out the spatial-temporal tubes of the input video and the word tokens of the input text, and then feed them into a unified autoencoder to reconstruct the missing pixels and words. Our SimVTP has several properties: 1) Thanks to the unified autoencoder, SimVTP reconstructs the masked signal of one modality with help from the other modality, which implicitly learns the cross-modal alignment between video tubes and text tokens. 2) SimVTP not only benefits from a high video masking ratio (e.g., 90%) due to the temporal redundancy of video, but also needs a high text masking ratio (e.g., 75%), much higher than BERT's (e.g., 15%), to achieve optimal performance. This is because the aid of the video modality makes text reconstruction less challenging, so a higher mask ratio is needed to make the pretext task hard enough for useful feature learning. 3) Equipping SimVTP with video-text contrastive learning (VTC) and video-text matching (VTM), two commonly used cross-modal training strategies, further improves transfer performance significantly. 4) SimVTP is data-efficient: pre-training on only 10% of the data of WebVid-2M, SimVTP achieves surprisingly good results (43.8 R@1) on MSRVTT, far above recent state-of-the-art methods pre-trained on both CC3M and WebVid-2M. We transfer our pre-trained model to various downstream tasks and achieve superior performance. The code and models will be released at https://github.com/mayuelala/SimVTP.
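A minimal sketch of the video-side tube masking (VideoMAE-style: the same spatial patches are dropped in every frame, forming space-time tubes); the patch size, ratio, and helper name are illustrative:

```python
import torch

def mask_video_tubes(video: torch.Tensor, mask_ratio: float = 0.9, patch: int = 16):
    """Randomly mask spatial-temporal tubes of a video.

    video: (B, T, C, H, W). The keep/drop decision is shared across all T frames,
    so a dropped patch becomes a tube through time.
    """
    b, t, c, h, w = video.shape
    n = (h // patch) * (w // patch)        # patches per frame
    keep = torch.rand(b, n) >= mask_ratio  # shared across time -> tubes
    keep_map = keep.view(b, 1, 1, h // patch, w // patch).float()
    keep_map = keep_map.repeat_interleave(patch, dim=-2).repeat_interleave(patch, dim=-1)
    return video * keep_map, keep

masked, keep = mask_video_tubes(torch.randn(2, 8, 3, 224, 224))
print(keep.float().mean())  # roughly 0.10 of patches survive
```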
We present state advantage weighting for offline reinforcement learning (RL). In contrast to the action advantage $A(s,a)$ commonly adopted in QSA learning, we leverage the state advantage $A(s,s^\prime)$ and QSS learning for offline RL, hence decoupling the action from the values. We expect the agent to reach high-reward states, with the action determined by how the agent gets to the corresponding next state. Experiments on D4RL datasets show that our proposed method achieves remarkable performance against common baselines. Furthermore, our method shows good generalization capability when transferring from offline to online.
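For context, the QSS quantities can be written out as follows (notation assumed from prior QSS-style work; the abstract does not spell out the exact weighting scheme): the transition value obeys $Q(s,s^\prime) = r(s,s^\prime) + \gamma \max_{s^{\prime\prime}} Q(s^\prime,s^{\prime\prime})$ over reachable next states, the state advantage is $A(s,s^\prime) = Q(s,s^\prime) - V(s)$, and the executed action is recovered separately, e.g. by an inverse-dynamics model $a = I(s,s^\prime)$, which is what decouples the action from the values.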
Purely data-driven deep neural networks (DNNs) applied to physical engineering systems can infer relations that violate physics laws, leading to unexpected consequences. To address this challenge, we propose a physics-model-based DNN framework, called Phy-Taylor, that accelerates learning compliant representations with physical knowledge. The Phy-Taylor framework makes two key contributions: it introduces a new architectural physics-compatible neural network (PhN), and features a novel compliance mechanism that we call physics-guided neural-network editing. The PhN aims to directly capture nonlinearities inspired by physical quantities, such as kinetic energy, potential energy, electrical power, and aerodynamic drag. To this end, the PhN augments neural network layers with two key components: (i) monomials of the Taylor series expansion of nonlinear functions that capture physical knowledge, and (ii) a suppressor for mitigating the influence of noise. The neural-network editing mechanism further modifies network links and activation functions to be consistent with physical knowledge. As an extension, we also propose a self-correcting Phy-Taylor framework that introduces two additional capabilities: (i) physics-model-based safety-relationship learning, and (ii) automatic output correction when safety violations occur. Through experiments, we show that (by directly expressing hard-to-learn nonlinearities and by constraining dependencies) Phy-Taylor features considerably fewer parameters and a markedly accelerated training process, while offering enhanced model robustness and accuracy.
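A minimal sketch of the two PhN ingredients in plain Python (the monomial expansion and one plausible choice of suppressor; the paper's exact suppressor form and layer wiring are not specified here, so treat both functions as assumptions):

```python
import itertools
import numpy as np

def taylor_features(x: np.ndarray, order: int = 2) -> np.ndarray:
    """Expand a raw input vector into Taylor-series monomials up to `order`,
    so physics-style nonlinearities (v**2 drag, x*y couplings, ...) become
    linear in the feature space. Returns [1, x0, x1, ..., x0*x1, ...]."""
    feats = [1.0]  # zeroth-order term
    for k in range(1, order + 1):
        for idx in itertools.combinations_with_replacement(range(len(x)), k):
            feats.append(float(np.prod([x[i] for i in idx])))
    return np.array(feats)

def suppressor(x: np.ndarray, cap: float = 10.0) -> np.ndarray:
    """Bounded squashing of raw inputs, damping measurement noise before it is
    amplified by the higher-order monomials (one simple choice, not the paper's)."""
    return cap * np.tanh(np.asarray(x, dtype=float) / cap)

x = np.array([1.5, -2.0])                       # e.g. velocity and displacement
print(taylor_features(suppressor(x), order=2))  # 6 monomial features
```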
We introduce point affiliation into feature upsampling, a notion that describes the affiliation of each upsampled point to a semantic cluster formed by local decoder feature points with semantic similarity. By rethinking point affiliation, we present a generic formulation for generating upsampling kernels. The kernels encourage not only semantic smoothness but also boundary sharpness in the upsampled feature maps. Such properties are particularly useful for certain dense prediction tasks such as semantic segmentation. The key idea of our formulation is to generate similarity-aware kernels by comparing the similarity between each encoder feature point and the spatially associated local region of decoder features. In this way, an encoder feature point can serve as a cue to inform the semantic cluster of the upsampled feature points. To embody this formulation, we further instantiate a lightweight upsampling operator, termed Similarity-Aware Point Affiliation (SAPA), and investigate its variants. SAPA yields consistent performance improvements on a number of dense prediction tasks, including semantic segmentation, object detection, depth estimation, and image matting. Code is available at: https://github.com/poppinace/sapa
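A simplified sketch of the similarity-aware kernel idea for 2x upsampling (dot-product similarities between each high-resolution encoder point and the local window of decoder features it is spatially associated with, softmaxed into a reassembly kernel); SAPA's learned projections, gating, and normalization variants are omitted, and the helper name is an assumption:

```python
import torch
import torch.nn.functional as F

def sapa_like_upsample(dec: torch.Tensor, enc: torch.Tensor, k: int = 5) -> torch.Tensor:
    """dec: (B, C, h, w) low-res decoder features; enc: (B, C, 2h, 2w) high-res
    encoder features. Each high-res point gets a softmax kernel over the k x k
    decoder window around its corresponding low-res site."""
    b, c, h, w = dec.shape
    H, W = enc.shape[-2:]
    # gather k*k decoder candidates per low-res site, then replicate to high-res
    win = F.unfold(dec, k, padding=k // 2).view(b, c, k * k, h, w)
    win = F.interpolate(win.reshape(b, c * k * k, h, w), size=(H, W), mode="nearest")
    win = win.view(b, c, k * k, H, W)
    sim = (enc.unsqueeze(2) * win).sum(1) / c ** 0.5  # (B, k*k, H, W) similarities
    kernel = sim.softmax(dim=1)                       # point-affiliation weights
    return (win * kernel.unsqueeze(1)).sum(2)         # (B, C, H, W) upsampled map

out = sapa_like_upsample(torch.randn(1, 8, 16, 16), torch.randn(1, 8, 32, 32))
print(out.shape)  # torch.Size([1, 8, 32, 32])
```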
In research on subsurface seismic imaging, solving the acoustic wave equation is a key component of existing models. With the development of deep learning, neural networks have been applied to numerically solving partial differential equations, the wave equation in particular, by learning the mapping between the inputs and the solution of the equation, since traditional methods can be time-consuming when the velocity model is complicated. Previous work focusing on solving the wave equation with neural networks considers either a single velocity model or multiple simple velocity models, which is limiting in practice. Therefore, inspired by the idea of operator learning, this work leverages the Fourier neural operator (FNO) to effectively learn frequency-domain seismic wavefields in the context of variable velocity models. Moreover, we propose a new framework, the parallel Fourier neural operator (PFNO), for efficiently training FNO-based solvers given multiple source locations and frequencies. Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complicated velocity models from the OpenFWI datasets. Furthermore, a cross-dataset generalization test verifies that PFNO adapts to out-of-distribution velocity models. Likewise, PFNO performs robustly in the presence of random noise in the labels. Finally, compared with the traditional finite-difference method, PFNO achieves higher computational efficiency on large-scale test datasets. These advantages endow the FNO-based solver with the potential to build powerful models for research on seismic waves.
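At the heart of an FNO-based solver is the spectral convolution layer: transform to the Fourier domain, apply a learned linear map to the lowest modes, and transform back. A minimal single-layer sketch (keeping only the non-negative low-frequency block; full FNO implementations also keep the mirrored negative-frequency modes, and PFNO's parallelism over sources/frequencies is not shown):

```python
import torch

class SpectralConv2d(torch.nn.Module):
    """FFT -> learned complex linear map on the lowest modes -> inverse FFT."""
    def __init__(self, c_in: int, c_out: int, modes1: int, modes2: int):
        super().__init__()
        self.modes1, self.modes2 = modes1, modes2
        scale = 1.0 / (c_in * c_out)
        self.weight = torch.nn.Parameter(
            scale * torch.randn(c_in, c_out, modes1, modes2, dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        xf = torch.fft.rfft2(x)                            # (B, C, H, W//2+1) spectrum
        out = torch.zeros(x.size(0), self.weight.size(1), *xf.shape[-2:],
                          dtype=torch.cfloat, device=x.device)
        m1, m2 = self.modes1, self.modes2
        out[:, :, :m1, :m2] = torch.einsum(
            "bixy,ioxy->boxy", xf[:, :, :m1, :m2], self.weight)
        return torch.fft.irfft2(out, s=x.shape[-2:])       # back to (B, C, H, W)

layer = SpectralConv2d(3, 3, modes1=12, modes2=12)
print(layer(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 3, 64, 64])
```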